
    BATUD: Blind Atmospheric TUrbulence Deconvolution

    A new blind image deconvolution technique is developed for atmospheric turbulence deblurring. The originality of the proposed approach lies in its use of an actual physical model, known as the Fried kernel, that quantifies the impact of atmospheric turbulence on the optical resolution of images. While the original expression of the Fried kernel can seem cumbersome at first sight, we show that it can be reparameterized in a much simpler form. This simple expression allows us to efficiently embed the kernel in the proposed Blind Atmospheric TUrbulence Deconvolution (BATUD) algorithm. BATUD is an iterative algorithm that alternately performs deconvolution and estimates the Fried kernel by jointly relying on a Gaussian Mixture Model prior of natural image patches and controlling the squared Euclidean norm of the Fried kernel. Numerical experiments show that our proposed blind deconvolution algorithm behaves well in different simulated turbulence scenarios, as well as on real images. Not only does BATUD outperform state-of-the-art approaches to atmospheric turbulence deconvolution in terms of image quality metrics, but it is also faster.
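    The abstract does not spell out the reparameterized kernel, so the sketch below is only a minimal illustration of the alternating scheme, assuming the long-exposure Fried OTF can be collapsed into a single scalar parameter and substituting Wiener deconvolution for the paper's GMM patch prior. All names and parameters are hypothetical.

```python
import numpy as np

def fried_otf(shape, theta):
    """Hypothetical one-parameter form of the long-exposure Fried OTF,
    exp(-theta * f^(5/3)); the paper's exact reparameterization may differ."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx**2 + fy**2)
    return np.exp(-theta * f**(5.0 / 3.0))

def wiener_deconv(y, otf, snr=100.0):
    """Stand-in non-blind step (the paper uses a GMM prior on patches)."""
    Y = np.fft.fft2(y)
    X = np.conj(otf) * Y / (np.abs(otf)**2 + 1.0 / snr)
    return np.real(np.fft.ifft2(X))

def batud_like(y, thetas=np.linspace(0.1, 20.0, 50), n_iter=5):
    """Toy alternation: deconvolve given the kernel, then grid-search the
    scalar kernel parameter by residual fit (a crude proxy for the paper's
    joint criterion with a norm constraint on the kernel)."""
    theta = thetas[len(thetas) // 2]
    Y = np.fft.fft2(y)
    for _ in range(n_iter):
        x = wiener_deconv(y, fried_otf(y.shape, theta))
        X = np.fft.fft2(x)
        errs = [np.linalg.norm(Y - fried_otf(y.shape, t) * X) for t in thetas]
        theta = thetas[int(np.argmin(errs))]
    return wiener_deconv(y, fried_otf(y.shape, theta)), theta
```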

    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low-light regimes. (Erratum: the image "man" is wrongly named "pepper" in the journal version.)
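    As a rough illustration of the patch-PCA idea, the sketch below substitutes a global Anscombe transform plus plain patch PCA for the paper's Poisson-adapted PCA and sparsity-regularized optimization; it is a simplified stand-in, not the authors' algorithm.

```python
import numpy as np

def anscombe(x):
    # Variance-stabilizing stand-in; the paper instead adapts PCA
    # to the Poisson likelihood directly.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(z):
    return (z / 2.0) ** 2 - 3.0 / 8.0

def extract_patches(img, p):
    H, W = img.shape
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1)
                     for j in range(W - p + 1)])

def pca_project(P, k):
    """Project patch vectors onto their top-k principal components."""
    mu = P.mean(axis=0)
    U, s, Vt = np.linalg.svd(P - mu, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k] + mu

def denoise_poisson_pca(img, p=8, k=8):
    z = anscombe(img.astype(float))
    Pd = pca_project(extract_patches(z, p), k)
    # Aggregate the overlapping patch estimates by uniform averaging.
    H, W = z.shape
    acc = np.zeros_like(z)
    wts = np.zeros_like(z)
    idx = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            acc[i:i + p, j:j + p] += Pd[idx].reshape(p, p)
            wts[i:i + p, j:j + p] += 1.0
            idx += 1
    return inverse_anscombe(acc / wts)
```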

    How to compare noisy patches? Patch similarity beyond Gaussian noise

    Many tasks in computer vision require matching image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level (so-called image-based) approaches compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient of many techniques for image registration, stereo-vision, change detection or denoising. Recent progress in natural image modeling also makes intensive use of patch comparison. A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or to intrinsic dissimilarity. The Gaussian noise assumption leads to the classical definition of patch similarity based on the squared differences of intensities. For cases where the noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature on image processing, detection theory and machine learning. By expressing patch (dis)similarity as a detection test under a given noise model, we introduce these criteria along with a new one and discuss their properties. We then assess their performance on different tasks: patch discrimination, image denoising, stereo-matching and motion-tracking under gamma and Poisson noises. The proposed criterion, based on the generalized likelihood ratio, is shown to be both easy to derive and powerful in these diverse applications.
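    For concreteness, here is the Poisson instance of the generalized likelihood ratio test the abstract refers to: under H0 the two pixels share one intensity (MLE (x+y)/2), under H1 each has its own (MLEs x and y). This follows the standard GLR construction; the paper's exact normalization may differ.

```python
import numpy as np

def glr_poisson(x, y):
    """Patchwise log-GLR for two Poisson observations; values are <= 0,
    and values closer to 0 mean the differences are better explained
    by noise alone. Convention: 0 * log(0) = 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    s = x + y
    def xlogx(v):
        return np.where(v > 0, v * np.log(np.maximum(v, 1e-300)), 0.0)
    return float(np.sum(xlogx(s) - s * np.log(2.0) - xlogx(x) - xlogx(y)))
```

    For denoising, the exponential of this score can serve as a patch weight in a non-local-means-style average.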

    Template Matching with Noisy Patches: A Contrast-Invariant GLR Test

    Matching patches from a noisy image to atoms in a dictionary of patches is a key ingredient of many techniques in image processing and computer vision. By representing with a single atom all patches that are identical up to a radiometric transformation, the dictionary size can be kept small, thereby retaining good computational efficiency. Identifying the atom that best matches a given noisy patch then requires a contrast-invariant criterion. In the light of detection theory, we propose a new criterion that ensures contrast invariance and robustness to noise. We discuss its theoretical grounding and assess its performance under Gaussian, gamma and Poisson noises.
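    The abstract does not give the criterion's closed form, but under Gaussian noise and an affine radiometric model (patch ≈ a·atom + b + noise) the GLR ranks atoms by the squared normalized correlation of mean-centered patches, as sketched below; the gamma and Poisson cases treated in the paper are not shown.

```python
import numpy as np

def contrast_invariant_score(patch, atom, eps=1e-12):
    """Gaussian-noise illustration: the least-squares fit of
    patch ~ a * atom + b leaves a residual governed by the squared
    normalized correlation, which is invariant to affine contrast
    changes of either input."""
    p = patch.ravel() - patch.mean()
    a = atom.ravel() - atom.mean()
    return (p @ a) ** 2 / ((p @ p) * (a @ a) + eps)  # in [0, 1]
```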

    Patch similarity under non Gaussian noise

    Many tasks in computer vision require matching image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level approaches compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient of many techniques for image registration, stereo-vision, change detection or denoising. A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or to intrinsic dissimilarity. The Gaussian noise assumption leads to the classical definition of patch similarity based on the squared intensity differences. When the noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature. We review seven such criteria from the fields of image processing, detection theory and machine learning. We discuss their theoretical grounding and provide a numerical comparison of their performance under gamma and Poisson noises.
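    As a companion to the Poisson case sketched earlier, the gamma (L-look speckle) instance of the GLR criterion also has a compact closed form; the sketch below assumes strictly positive intensities and the standard L-look gamma likelihood, and is only one of the seven criteria the paper reviews.

```python
import numpy as np

def glr_gamma(x, y, L=1.0):
    """Patchwise log-GLR under L-look gamma noise, obtained by plugging
    the MLEs theta = x, theta = y (separate) and theta = (x+y)/2 (shared)
    into the gamma likelihood; always <= 0, with 0 meaning x == y.
    Assumes x, y > 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sum(2.0 * L * np.log(2.0 * np.sqrt(x * y) / (x + y))))
```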

    Combining Patch-Based Estimation and Total Variation Regularization for 3D InSAR Reconstruction

    In this paper we propose a new approach to height retrieval using multi-channel SAR interferometry. It combines patch-based estimation and total variation regularization to provide a regularized height estimate. The adaptation of the non-local likelihood term relies on the NL-SAR method, and the global optimization is performed through graph-cut minimization. The method is evaluated on both synthetic and real data.
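    The paper's global step is a graph-cut minimization over discrete heights; the toy sketch below swaps in a smoothed-TV gradient descent so the combination of a patch-based data term and TV regularization can be seen end to end. Here h_nl (an NL-SAR-style height estimate) and w (its per-pixel confidence) are hypothetical interfaces.

```python
import numpy as np

def tv_regularize(h_nl, w=None, lam=1.0, tau=0.2, n_iter=200, eps=1e-3):
    """Gradient descent on sum(w * (h - h_nl)^2 / 2) + lam * smoothed-TV(h);
    a convex stand-in for the paper's graph-cut optimization."""
    h = h_nl.astype(float).copy()
    if w is None:
        w = np.ones_like(h)
    for _ in range(n_iter):
        # Forward differences, zero at the far borders.
        gx = np.diff(h, axis=1, append=h[:, -1:])
        gy = np.diff(h, axis=0, append=h[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / mag, gy / mag
        # Discrete divergence (negative adjoint of the gradient).
        divx = px.copy(); divx[:, 1:] -= px[:, :-1]
        divy = py.copy(); divy[1:, :] -= py[:-1, :]
        h -= tau * (w * (h - h_nl) - lam * (divx + divy))
    return h
```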

    Risk estimation for matrix recovery with spectral regularization

    In this paper, we develop an approach to recursively estimate the quadratic risk for matrix recovery problems regularized with spectral functions. Toward this end, in the spirit of the SURE theory, a key step is to compute the (weak) derivative and divergence of a solution with respect to the observations. As such a solution is not available in closed form, but rather through a proximal splitting algorithm, we propose to recursively compute the divergence from the sequence of iterates. A second challenge that we address is the computation of the (weak) derivative of the proximity operator of a spectral function. To show the potential applicability of our approach, we exemplify it on a matrix completion problem to objectively and automatically select the regularization parameter. (This version is an update of our original paper presented at the ICML'2012 workshop on Sparsity, Dictionaries and Projections in Machine Learning and Signal Processing.)
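    The paper differentiates the iterates analytically; the sketch below instead takes a cheaper Monte-Carlo route (a single random probe and a finite difference, in the spirit of Monte-Carlo SURE) on a nuclear-norm-regularized matrix completion solved by ISTA with singular value soft-thresholding. All parameter names are illustrative.

```python
import numpy as np

def svt(Z, lam):
    """Prox of the nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ (np.maximum(s - lam, 0.0)[:, None] * Vt)

def complete(Y, mask, lam, n_iter=300):
    """ISTA for min_X 0.5 * ||mask * (X - Y)||_F^2 + lam * ||X||_*."""
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        X = svt(X - mask * (X - Y), lam)
    return X

def sure_mc(Y, mask, lam, sigma, eps=1e-4, seed=0):
    """Monte-Carlo SURE: one-probe finite-difference divergence estimate
    instead of the paper's recursion through the weak derivatives."""
    delta = np.random.default_rng(seed).standard_normal(Y.shape)
    X0 = complete(Y, mask, lam)
    X1 = complete(Y + eps * delta, mask, lam)
    div = (delta * mask * (X1 - X0)).sum() / eps
    n = mask.sum()
    rss = ((mask * (Y - X0)) ** 2).sum()
    return rss - n * sigma**2 + 2.0 * sigma**2 * div
```

    Sweeping lam and keeping the minimizer of sure_mc mimics the automatic parameter selection described in the abstract.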

    Patch-Based SAR Image Classification: The Potential of Modeling the Statistical Distribution of Patches with Gaussian Mixtures

    Due to their coherent nature, SAR (Synthetic Aperture Radar) images are very different from optical satellite images and more difficult to interpret, especially because of speckle noise. Given the increasing amount of available SAR data, efficient image processing techniques are needed to ease the analysis. Classifying this type of image, i.e., selecting an adequate label for each pixel, is a challenging task. This paper describes a supervised classification method based on local features derived from a Gaussian mixture model (GMM) of the distribution of patches. Initial classification results are encouraging and suggest that the GMM holds interesting potential for SAR imaging.
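    A minimal version of the described pipeline: fit one GMM per class on patch vectors, then label each pixel by the class whose GMM gives its surrounding patch the highest likelihood. The log-intensity preprocessing is an assumption (a common way to tame multiplicative speckle); the paper's exact features may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(patches_by_class, n_components=10):
    """One GMM per class over log-intensity patch vectors."""
    return {c: GaussianMixture(n_components, covariance_type='full',
                               random_state=0).fit(np.log(P + 1e-6))
            for c, P in patches_by_class.items()}

def classify_patches(P, gmms):
    """Assign each patch to the class whose GMM scores it highest."""
    classes = list(gmms)
    logp = np.stack([gmms[c].score_samples(np.log(P + 1e-6))
                     for c in classes], axis=1)
    return [classes[i] for i in logp.argmax(axis=1)]
```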